20 research outputs found

    IFRS9 Expected Credit Loss Estimation: Advanced Models for Estimating Portfolio Loss and Weighting Scenario Losses

    Estimation of portfolio expected credit loss is required for IFRS9 regulatory purposes. It starts with the estimation of scenario losses at the loan level, which are then aggregated and summed by scenario probability weights to obtain the portfolio expected loss. This estimated loss can vary significantly, depending on the levels of loss severity generated by the IFRS9 models and the probability weights chosen. There is therefore a need for a quantitative approach to determining the weights for scenario losses. In this paper, we propose a model to estimate the expected portfolio losses brought by recession risk, and a quantitative approach for determining the scenario weights. The model and approach are validated by an empirical example, in which we stress the portfolio expected loss by recession risk and calculate the scenario weights accordingly.
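    The scenario-weighting arithmetic described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's model: the loan-level losses and probability weights below are made-up numbers, and the paper's contribution is precisely a quantitative method for choosing those weights.

    ```python
    import numpy as np

    # Hypothetical loan-level losses per scenario (illustrative values only).
    # Rows = loans, columns = scenarios (base, upside, downside).
    scenario_losses = np.array([
        [100.0,  80.0, 150.0],
        [ 50.0,  40.0,  90.0],
        [200.0, 160.0, 310.0],
    ])

    # Assumed scenario probability weights; they must sum to 1.
    weights = np.array([0.6, 0.2, 0.2])

    # Aggregate loan losses to the portfolio level within each scenario,
    # then sum across scenarios using the probability weights.
    portfolio_scenario_loss = scenario_losses.sum(axis=0)     # [350, 280, 550]
    expected_loss = float(portfolio_scenario_loss @ weights)  # 0.6*350 + 0.2*280 + 0.2*550

    print(expected_loss)  # 376.0
    ```

    Because the weighted sum is linear in the weights, shifting probability mass toward the downside scenario raises the expected loss directly, which is why the choice of weights matters so much.
    
    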

    Deeply‐Recursive Attention Network for video steganography

    Video steganography plays an important role in secret communication: it conceals a secret video in a cover video by perturbing the pixel values of the cover frames. Imperceptibility is the first and foremost requirement of any steganographic approach. Inspired by the fact that human eyes perceive pixel perturbation differently in different video areas, a novel, effective, and efficient Deeply-Recursive Attention Network (DRANet) for video steganography is proposed, which finds suitable areas for information hiding by modelling spatio-temporal attention. The DRANet contains two important components: a Non-Local Self-Attention (NLSA) block and a Non-Local Co-Attention (NLCA) block. Specifically, the NLSA block selects the cover-frame areas that are suitable for hiding by computing the correlations among inter- and intra-cover frames. The NLCA block aims to produce enhanced representations of the secret frames, improving the robustness of the model and alleviating the influence of different areas in the secret video. Furthermore, the DRANet reduces the model parameters by performing similar operations on the different frames within an input video recursively. Experimental results show that the proposed DRANet achieves better performance with fewer parameters than the state-of-the-art competitors.
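    The core of a non-local attention block, as used in the NLSA component described above, is that every spatial position attends to every other position. The paper's actual block involves learned projections and operates across frames; the sketch below shows only the basic non-local computation on a single feature map, with all shapes and values assumed for illustration.

    ```python
    import numpy as np

    def non_local_attention(x):
        """Minimal non-local attention over flattened frame features.

        x: (N, C) array of N spatial positions with C channels.
        Returns attention-weighted features of the same shape.
        """
        # Pairwise affinities between all positions (the "non-local" part),
        # scaled by sqrt(C) for numerical stability.
        scores = x @ x.T / np.sqrt(x.shape[1])
        # Row-wise softmax turns affinities into attention weights.
        scores -= scores.max(axis=1, keepdims=True)
        attn = np.exp(scores)
        attn /= attn.sum(axis=1, keepdims=True)
        # Each output position aggregates information from every position.
        return attn @ x

    rng = np.random.default_rng(0)
    feats = rng.standard_normal((16, 8))  # e.g. a 4x4 frame patch with 8 channels
    out = non_local_attention(feats)
    print(out.shape)  # (16, 8)
    ```

    In the full network these attention maps are what let the model rank areas of the cover frames by how well they can absorb hidden information.
    
    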

    3D printing of thermosets with diverse rheological and functional applicabilities

    Thermosets are ubiquitous, but existing manufacturing of thermosets involves either a prolonged manufacturing cycle, low geometric complexity, or limited processable materials. Here, the authors report an in situ dual-heating strategy for the rapid 3D printing of thermosets with complex structures and diverse rheological properties by combining the direct ink writing (DIW) technique with a heating-accelerated in situ gelation mechanism.

    Shape morphing of plastic films

    Three-dimensional (3D) architectures have qualitatively expanded the functions of materials and flexible electronics. However, current fabrication techniques for devices constrain their substrates to 2D geometries, and current post-shape-transformation strategies are limited to heterogeneous or responsive materials and are not amenable to free-standing inert plastic films such as polyethylene terephthalate (PET) and polyimide (PI), which are vital substrates for flexible electronics. Here, we realize the shape morphing of homogeneous plastic films into various free-standing 3D frameworks from their 2D precursors by introducing a general strategy based on programming the plastic strain in films under peeling. By modulating the peeling parameters, previously inaccessible free-standing 3D geometries ranging from the millimeter to the micrometer scale were predicted theoretically and obtained experimentally. This strategy is applicable to most materials capable of plastic deformation, including polymers, metals, and composite materials, and can even enable 4D transformation with responsive plastic films. Enhanced performance of 3D circuits and piezoelectric systems demonstrates the enormous potential of peeling-induced shape morphing for 3D devices.

    X.C. acknowledges the support from the National Research Foundation (NRF), Prime Minister's Office, Singapore, under its Campus of Research Excellence and Technological Enterprise (CREATE), the Smart Grippers for Soft Robotics (SGSR) Programme, and the Agency for Science, Technology and Research (A*STAR) Advanced Manufacturing and Engineering (AME) Programmatic Grant (No. A18A1b0045). S.W. and X.C. acknowledge the support from the International Partnership Program of the Chinese Academy of Sciences (Grant No. 1A1111KYSB20200010). D.L. and H.G. acknowledge support from the Singapore Ministry of Education (MOE) AcRF Tier 1 (Grant RG120/21). H.G. acknowledges support as a Distinguished University Professorship from Nanyang Technological University and Scientific Directorship at the Institute of High Performance Computing from the Agency for Science, Technology and Research (A*STAR).